1.
Eur J Hum Genet ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565638

ABSTRACT

The advent of single-cell resolution sequencing and spatial transcriptomics has enabled the delivery of cellular and molecular atlases of tissues and organs, providing new insights into tissue health and disease. However, if the full potential of these technologies is to be equitably realised, ancestral inclusivity is paramount. Such a goal requires greater inclusion of both researchers and donors in low- and middle-income countries (LMICs). In this perspective, we describe the current landscape of ancestral inclusivity in genomic and single-cell transcriptomic studies. We discuss the collaborative efforts needed to scale the barriers to establishing, expanding, and adopting single-cell sequencing research in LMICs and to enable globally impactful outcomes of these technologies.

2.
BMC Med Ethics ; 25(1): 6, 2024 01 06.
Article in English | MEDLINE | ID: mdl-38184595

ABSTRACT

BACKGROUND: Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision-making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as a case study the use of AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on the perspectives of obstetricians and midwives regarding the ethical and trust-related issues of incorporating AI-driven tools in their practice, this paper explores the conditions that AI-driven CTG must fulfill for clinicians to feel justified in incorporating this assistive technology into their decision-making processes regarding interventions in labor. METHODS: This study is based on semi-structured interviews conducted online with eight obstetricians and five midwives based in England. Participants were asked about their current decision-making processes about when to intervene in labor, how AI-driven CTG might enhance or disrupt this process, and what it would take for them to trust this kind of technology. Interviews were transcribed verbatim and analyzed with thematic analysis. NVivo software was used to organize thematic codes that recurred in interviews to identify the issues that mattered most to participants. Topics and themes that were repeated across interviews were identified to form the basis of the analysis and conclusions of this paper. RESULTS: There were four major themes that emerged from our interviews with obstetricians and midwives regarding the conditions that AI-driven CTG must fulfill: (1) the importance of accurate and efficient risk assessments; (2) the capacity for personalization and individualized medicine; (3) the lack of significance regarding the type of institution that develops technology; and (4) the need for transparency in the development process.
CONCLUSIONS: Accuracy, efficiency, personalization abilities, transparency, and clear evidence that it can improve outcomes are conditions that clinicians deem necessary for AI-DSS to meet in order to be considered reliable and therefore worthy of being incorporated into the decision-making process. Importantly, healthcare professionals considered themselves as the epistemic authorities in the clinical context and the bearers of responsibility for delivering appropriate care. Therefore, what mattered to them was being able to evaluate the reliability of AI-DSS on their own terms, and have confidence in implementing them in their practice.


Subjects
Midwifery, Humans, Pregnancy, Female, Obstetricians, Reproducibility of Results, Clinical Decision-Making, Artificial Intelligence
4.
SSM Qual Res Health ; 3: 100240, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37426704

ABSTRACT

Computational phenotyping (CP) technology uses facial recognition algorithms to classify and potentially diagnose rare genetic disorders on the basis of digitised facial images. This AI technology has a number of research as well as clinical applications, such as supporting diagnostic decision-making. Using the example of CP, we examine stakeholders' views of the benefits and costs of using AI as a diagnostic tool within the clinic. Through a series of in-depth interviews (n = 20) with clinicians, clinical researchers, data scientists, industry and support group representatives, we report stakeholder views regarding the adoption of this technology in a clinical setting. While most interviewees were supportive of employing CP as a diagnostic tool in some capacity, we observed ambivalence around the potential for artificial intelligence to overcome diagnostic uncertainty in a clinical context. Thus, while there was widespread agreement amongst interviewees concerning the public benefits of AI-assisted diagnosis, namely its potential to increase diagnostic yield and enable faster, more objective and accurate diagnoses by upskilling non-specialists, thereby widening access to diagnosis where it is potentially lacking, interviewees also raised concerns about ensuring algorithmic reliability, expunging algorithmic bias, and the risk that the use of AI could deskill the specialist clinical workforce. We conclude that, prior to widespread clinical implementation, ongoing reflection is needed regarding the trade-offs required to determine acceptable levels of bias, and that diagnostic AI tools should only be employed as an assistive technology within the dysmorphology clinic.

5.
BMC Med Ethics ; 24(1): 51, 2023 07 14.
Article in English | MEDLINE | ID: mdl-37452393

ABSTRACT

It is widely acknowledged that trust plays an important role in the acceptability of data sharing practices in research and healthcare, and in the adoption of new health technologies such as AI. Yet there is reported distrust in this domain. Although in the UK the NHS is one of the most trusted public institutions, public trust does not appear to accompany its data sharing practices for research and innovation, specifically with the private sector, that have been introduced in recent years. In this paper, we examine the question: what is it about sharing NHS data for research and innovation with for-profit companies that challenges public trust? To address this question, we draw from political theory to provide an account of public trust that helps better understand the relationship between the public and the NHS within a democratic context, as well as the kind of obligations and expectations that govern this relationship. We then examine whether the way in which the NHS is managing patient data, and its collaboration with the private sector, fits within this trust-based relationship. We argue that the datafication of healthcare and the broader 'health and wealth' agenda adopted by consecutive UK governments represent a major shift in the institutional character of the NHS, which brings into question the meaning of the public good the NHS is expected to provide, challenging public trust. We conclude by suggesting that to address the problem of public trust, a theoretical and empirical examination of the benefits, but also the costs, associated with this shift needs to take place, as well as an open conversation at the public level to determine what values should be promoted by a public institution like the NHS.


Subjects
State Medicine, Trust, Humans, Qualitative Research, Delivery of Health Care
6.
BMC Med Ethics ; 24(1): 42, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37340408

ABSTRACT

BACKGROUND: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. METHODS: Seventeen semi-structured interviews were conducted with birth parents and mothers based on a speculative case study. Interviewees were based in England and were pregnant and/or had given birth in the last two years. Thematic analysis was used to analyze transcribed interviews with the use of NVivo. Major recurring themes acted as the basis for identifying the values most important to this population group for evaluating the trustworthiness of AI. RESULTS: Three themes pertaining to the perceived trustworthiness of AI emerged from interviews: (1) trustworthy AI-developing institutions, (2) trustworthy data from which AI is built, and (3) trustworthy decisions made with the assistance of AI. We found that birth parents and mothers trusted public institutions over private companies to develop AI, that they evaluated the trustworthiness of data by how representative it is of all population groups, and that they perceived trustworthy decisions as being mediated by humans even when supported by AI. CONCLUSIONS: The ethical values that underscore birth parents' and mothers' perceptions of trustworthy AI include fairness and reliability, as well as practices like patient-centered care, the promotion of publicly funded healthcare, holistic care, and personalized medicine. Ultimately, these are also the ethical values that people want to protect in the healthcare system.
Therefore, trustworthy AI is best understood not as a list of design features but in relation to how it undermines or promotes the ethical values that matter most to its end users. An ethical commitment to these values when creating AI in healthcare contexts opens up new challenges and possibilities for the design and deployment of AI.


Subjects
Acceptance and Commitment Therapy, Artificial Intelligence, Female, Pregnancy, Humans, Public Opinion, Reproducibility of Results, England
7.
BMC Med Inform Decis Mak ; 23(1): 73, 2023 04 20.
Article in English | MEDLINE | ID: mdl-37081503

ABSTRACT

Artificial intelligence (AI) is often cited as a possible solution to current issues faced by healthcare systems. This includes the freeing up of time for doctors and facilitating person-centred doctor-patient relationships. However, given the novelty of artificial intelligence tools, there is very little concrete evidence of their impact on the doctor-patient relationship or on how to ensure that they are implemented in a way which is beneficial for person-centred care. Given the importance of empathy and compassion in the practice of person-centred care, we conducted a literature review to explore how AI impacts these two values. Besides empathy and compassion, shared decision-making and trust relationships emerged as key values in the reviewed papers. We identified two concrete ways to help ensure that the use of AI tools has a positive impact on person-centred doctor-patient relationships. These are (1) using AI tools in an assistive role and (2) adapting medical education. The study suggests that we need to take intentional steps to ensure that the deployment of AI tools in healthcare has a positive impact on person-centred doctor-patient relationships. We argue that the proposed solutions are contingent upon clarifying the values underlying future healthcare systems.


Subjects
Artificial Intelligence, Physician-Patient Relations, Humans, Empathy, Patient-Centered Care, Trust
8.
Eur J Hum Genet ; 31(6): 687-695, 2023 06.
Article in English | MEDLINE | ID: mdl-36949262

ABSTRACT

An increasing number of European research projects return, or plan to return, individual genomic research results (IRR) to participants. While data access is a data subject's right under the General Data Protection Regulation (GDPR), and many legal and ethical guidelines allow or require participants to receive personal data generated in research, the practice of returning results is not straightforward and raises several practical and ethical issues. Existing guidelines focusing on return of IRR are mostly project-specific, only discuss which results to return, or were developed outside Europe. To address this gap, we analysed existing normative documents identified online using inductive content analysis. We used this analysis to develop a checklist of steps to assist European researchers considering whether to return IRR to participants. We then sought feedback on the checklist from an interdisciplinary panel of European experts (clinicians, clinical researchers, population-based researchers, biobank managers, ethicists, lawyers and policy makers) to refine the checklist. The checklist outlines seven major components researchers should consider when determining whether, and how, to return results to adult research participants: 1) Decide which results to return; 2) Develop a plan for return of results; 3) Obtain participant informed consent; 4) Collect and analyse data; 5) Confirm results; 6) Disclose research results; 7) Follow-up and monitor. Our checklist provides a clear outline of the steps European researchers can follow to develop ethical and sustainable result return pathways within their own research projects. Further legal analysis is required to ensure this checklist complies with relevant domestic laws.


Subjects
Checklist, Informed Consent, Humans, Europe, Genomics, Surveys and Questionnaires
9.
BMC Med Ethics ; 23(1): 112, 2022 11 16.
Article in English | MEDLINE | ID: mdl-36384545

ABSTRACT

BACKGROUND: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. METHODS: Semi-structured qualitative interviews (n = 20) with stakeholders (clinical geneticists, data scientists, bioinformaticians, industry and patient support group spokespersons) who design and/or work with computational phenotyping (CP) systems. The method of constant comparison was used to analyse the interview data. RESULTS: Interviewees emphasized the importance of establishing trust in the use of CP technology in identifying rare diseases. Trust was formulated in two interrelated ways in these data. First, interviewees talked about the importance of using CP tools within the context of a trust relationship, arguing that patients will need to trust clinicians who use AI tools and that clinicians will need to trust AI developers, if they are to adopt this technology. Second, they described a need to establish trust in the technology itself, or in the knowledge it provides-epistemic trust. Interviewees suggested CP tools used for the diagnosis of rare diseases might be perceived as more trustworthy if the user is able to vouch for the technology's reliability and accuracy and the person using/developing them is trusted. CONCLUSION: This study suggests we need to take deliberate and meticulous steps to design reliable or confidence-worthy AI systems for use in healthcare. In addition, we need to devise reliable or confidence-worthy processes that would give rise to reliable systems; these could take the form of RCTs and/or systems of accountability, transparency, and responsibility that would signify the epistemic trustworthiness of these tools.


Subjects
Rare Diseases, Trust, Humans, Rare Diseases/diagnosis, Reproducibility of Results, Machine Learning, Algorithms
11.
BMC Med Ethics ; 23(1): 37, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35387625

ABSTRACT

BACKGROUND: Research proactively and deliberately aims to bring about specific changes to how societies function and how individual lives fare. However, in the ever-expanding field of ethical regulations and guidance for researchers, one ethical consideration seems to have passed under the radar: how should researchers act when pursuing actual, societal changes based on their academic work? MAIN TEXT: When researchers engage in the process of bringing about societal impact to tackle local or global challenges, important concerns arise: cultural, social and political values and institutions can be put at risk, transformed or even hampered if researchers lack awareness of how their 'acting to impact' influences the social world. With today's strong focus on research impact, addressing such ethical challenges has become urgent within all fields of research involved in finding solutions to the challenges societies are facing. Because ethical approaches usually carry an inherent goal of doing something good, boundaries to researchers' pursuit of something good are neither obvious nor easy to detect. We suggest that it is time for the field of bioethics to explore normative boundaries for researchers' pursuit of impact and to consider, in detail, the ethical obligations that ought to shape this process, and we provide a four-step framework of fair conditions for such an approach. Our suggested approach within this field can be useful for other fields of research as well. CONCLUSION: With this paper, we draw attention to how the transition from pursuing impact within the Academy to trying to initiate and achieve impact beyond the Academy ought to be configured, and the ethical challenges inherent in this transition. We suggest a stepwise strategy to identify, discuss and constitute consensus-based boundaries to this academic activity. This strategy calls for efforts from a multi-disciplinary team of researchers, advisors from the humanities and social sciences, and discussants from funding institutions, ethics committees, politics and society in general. Such efforts should offer new and useful assistance to researchers, as well as research funding agencies, in choosing ethically acceptable, impact-pursuing projects.


Subjects
Bioethics, Humanities, Humans, Morals, Research Personnel, Social Sciences
12.
J Med Ethics ; 48(11): 852-856, 2022 11.
Article in English | MEDLINE | ID: mdl-34426519

ABSTRACT

Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust as the basis upon which a relationship between this new technology and the public is built is at best ineffective and at worst inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge, not merely as a means to an end but as something to work towards in practice: the deserved result of an ongoing ethical relationship where there is the appropriate, enforceable and reliable regulatory infrastructure in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.


Subjects
Artificial Intelligence, Medicine, Humans, Trust, Delivery of Health Care
13.
J Oral Biol Craniofac Res ; 11(4): 612-614, 2021.
Article in English | MEDLINE | ID: mdl-34567966

ABSTRACT

AI has the potential to disrupt and transform the way we deliver care globally. It is reputed to be able to improve the accuracy of diagnoses and treatments, and make the provision of services more efficient and effective. In surgery, AI systems could lead to more accurate diagnoses of health problems and help surgeons better care for their patients. In the context of low- and middle-income countries (LMICs), where access to healthcare remains a global problem, AI could facilitate access to healthcare professionals and services, even specialist services, for millions of people. The ability of AI to deliver on its promises, however, depends on successfully resolving the ethical and practical issues identified, including those of explainability and algorithmic bias. Even though such issues might appear to be merely practical or technical ones, their closer examination uncovers questions of value, fairness and trust. It should not be left to AI developers, whether research institutions or global tech companies, to decide how to resolve these ethical questions. In particular, relying only on the trustworthiness of companies and institutions to address ethical issues relating to justice, fairness and health equality would be unsuitable and unwise. The pathway to a fair, appropriate and relevant AI necessitates the development, and critically, successful implementation of national and international rules and regulations that define the parameters and set the boundaries of operation and engagement.

14.
J Med Ethics ; 47(10): 689-696, 2021 10.
Article in English | MEDLINE | ID: mdl-33441306

ABSTRACT

A rapidly growing proportion of health research uses 'secondary data': data used for purposes other than those for which it was originally collected. Do researchers using secondary data have an obligation to disclose individual research findings to participants? While the importance of this question has been duly recognised in the context of primary research (ie, where data are collected from participants directly), it remains largely unexamined in the context of research using secondary data. In this paper, we critically examine the arguments for a moral obligation to disclose individual research findings in the context of primary research, to determine if they can be applied to secondary research. We conclude that they cannot. We then propose that the nature of the relationship between researchers and participants is what gives rise to particular moral obligations, including the obligation to disclose individual results. We argue that the relationship between researchers and participants in secondary research does not generate an obligation to disclose. However, we also argue that the biobanks or data archives which collect and provide access to secondary data may have such an obligation, depending on the nature of the relationship they establish with participants.


Subjects
Moral Obligations, Research Personnel, Humans
15.
BMC Med Ethics ; 21(1): 110, 2020 11 03.
Article in English | MEDLINE | ID: mdl-33143692

ABSTRACT

BACKGROUND: In the UK, the solidaristic character of the NHS makes it one of the most trusted public institutions. In recent years, the introduction of data-driven technologies in healthcare has opened up the space for collaborations with private digital companies seeking access to patient data. However, these collaborations appear to challenge the public's trust in the NHS. MAIN TEXT: In this paper we explore how the opening of the healthcare sector to private digital companies challenges the existing social contract and the NHS's solidaristic character, and impacts on public trust. We start by critically discussing different examples of partnerships between the NHS and private companies that collect and use data. We then analyse the relationship between trust and solidarity, and investigate how this relationship changes in the context of digital companies entering the healthcare system. Finally, we show ways for the NHS to maintain public trust by putting in place a solidarity-grounded partnership model with companies seeking to access patient data. Such a model would need to serve collective interests through, for example, securing preferential access to goods and services, providing health benefits, and monitoring data access. CONCLUSION: A solidarity-grounded partnership model will help establish a social contract or licence that responds to the public's expectations and to the principles of a solidaristic healthcare system.


Subjects
Delivery of Health Care, Trust, Humans
17.
J Med Ethics ; 2020 Jul 24.
Article in English | MEDLINE | ID: mdl-32709754

ABSTRACT

Medicine is not merely a job that requires technical expertise, but a profession concerned with making the best decisions and recommendations with reference to, and in consultation with, the patient. This means that the skill set required for healthcare professionals in order to provide good care is a combination of scientific knowledge, technical aptitude, and affective qualities or virtues such as compassion and empathy.

18.
Bull World Health Organ ; 98(4): 245-250, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284647

ABSTRACT

Empathy, compassion and trust are fundamental values of a patient-centred, relational model of health care. In recent years, the quest for greater efficiency in health care, including economic efficiency, has often resulted in the side-lining of these values, making it difficult for health-care professionals to incorporate them in practice. Artificial intelligence is increasingly being used in health care. This technology promises greater efficiency and more free time for health-care professionals to focus on the human side of care, including fostering trust relationships and engaging with patients with empathy and compassion. This article considers the vision of efficient, empathetic and trustworthy health care put forward by the proponents of artificial intelligence. The paper suggests that artificial intelligence has the potential to fundamentally alter the way in which empathy, compassion and trust are currently regarded and practised in health care. Moving forward, it is important to re-evaluate whether and how these values could be incorporated and practised within a health-care system where artificial intelligence is increasingly used. Most importantly, society needs to re-examine what kind of health care it ought to promote.


Subjects
Artificial Intelligence, Delivery of Health Care, Empathy, Trust, Humans